In recent years, vision-centric perception has flourished in various autonomous driving tasks, including 3D detection, semantic map construction, motion forecasting, and depth estimation. Nevertheless, the latency of vision-centric approaches is too high for practical deployment (e.g., most camera-based 3D detectors have a runtime greater than 300 ms). To bridge the gap between idealized research and real-world applications, it is necessary to quantify the trade-off between performance and efficiency. Traditionally, autonomous-driving perception benchmarks perform offline evaluation, neglecting the inference-time delay. To mitigate this problem, we propose the Autonomous-driving StreAming Perception (ASAP) benchmark, the first benchmark to evaluate the online performance of vision-centric perception in autonomous driving. On the basis of the 2Hz-annotated nuScenes dataset, we first propose an annotation-extending pipeline to generate high-frame-rate labels for the 12Hz raw images. With practical deployment in mind, we further construct the Streaming Perception Under constRained-computation (SPUR) evaluation protocol, where the 12Hz inputs are used for streaming evaluation under different computational-resource constraints. In the ASAP benchmark, comprehensive experimental results reveal that model rankings change under different constraints, suggesting that model latency and computation budget should be treated as design choices when optimizing for practical deployment. To facilitate further research, we establish baselines for camera-based streaming 3D detection that consistently enhance streaming performance across various hardware. ASAP project page: https://github.com/JeffWang987/ASAP.
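To make the streaming-evaluation idea concrete, here is a minimal sketch of how a scorer could match each ground-truth timestamp to the latest prediction the model has actually finished; a slow detector is therefore scored against newer ground truth. Function and variable names are illustrative, not the ASAP toolkit API.

```python
# Minimal sketch of streaming evaluation (illustrative, not the ASAP toolkit API):
# at each ground-truth timestamp, the scorer uses the latest prediction whose
# (input timestamp + model latency) has already elapsed.
def streaming_match(gt_timestamps, pred_timestamps, latencies):
    """For every ground-truth frame, return the index of the newest prediction
    that was available (finished computing) at that moment, or None."""
    matches = []
    for t_eval in gt_timestamps:
        available = [
            i for i, (t_in, lat) in enumerate(zip(pred_timestamps, latencies))
            if t_in + lat <= t_eval          # prediction finished before t_eval
        ]
        matches.append(max(available) if available else None)
    return matches

# Example: a 300 ms detector on a 12 Hz stream is always scored against ground
# truth that is several frames newer than its input.
gts = [i / 12.0 for i in range(12)]            # 12 Hz annotation timestamps
preds = [i / 12.0 for i in range(0, 12, 4)]    # detector keeps up with every 4th frame
lats = [0.30] * len(preds)                     # 300 ms latency per inference
print(streaming_match(gts, preds, lats))
```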
Artificial intelligence aims to teach machines to take actions like humans. To achieve intelligent teaching, the machine learning community has begun to study a promising topic named machine teaching, where the teacher designs the optimal (usually minimal) teaching set given a target model and a specific learner. However, previous works usually require numerous teaching examples and many iterations to guide learners to convergence, which is costly. In this paper, we consider a more intelligent teaching paradigm named one-shot machine teaching, which requires fewer examples to converge faster. Different from typical teaching, this advanced paradigm establishes a tractable mapping from the teaching set to the model parameter. Theoretically, we prove that this mapping is surjective, which serves as an existence guarantee for the optimal teaching set. Then, relying on the surjective mapping from the teaching set to the parameter, we develop a design strategy for the optimal teaching set under appropriate settings, for which two popular efficiency metrics, the teaching dimension and the iterative teaching dimension, are both one. Extensive experiments verify the efficiency of our strategy and further demonstrate the intelligence of this new teaching paradigm.
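A compact way to state the existence argument sketched above; the notation is ours, not the paper's.

```latex
% Notation is illustrative, not taken from the paper.
% Let A denote the learner's training procedure, viewed as a map from a
% teaching set D to the parameter it converges to.  One-shot machine teaching
% asks for a (small, ideally singleton) D that drives the learner to a target
% parameter theta*.
\[
  \mathcal{A}:\ \mathcal{D} \;\longrightarrow\; \Theta,
  \qquad
  \mathcal{A}\ \text{surjective}
  \;\Longrightarrow\;
  \forall\, \theta^{*}\in\Theta,\ \exists\, D\in\mathcal{D}:\ \mathcal{A}(D)=\theta^{*},
\]
% i.e., the optimal teaching set exists whenever the mapping from teaching
% sets to learned parameters is onto.
```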
Contrastive Language-Image Pre-trained (CLIP) models have the zero-shot ability to classify an image as belonging to "[CLASS]" by using the similarity between the image and the prompt sentence "a [CONTEXT] of [CLASS]". Based on exhaustive text cues in "[CONTEXT]", the CLIP model is aware of different contexts, e.g., background, style, and viewpoint, and exhibits unprecedented robustness against a wide range of distribution shifts. However, recent works find that further fine-tuning of CLIP models improves accuracy but sacrifices robustness on downstream tasks. We conduct an empirical investigation showing that fine-tuning corrupts the context-aware ability of pre-trained CLIP features. To solve this problem, we propose Context-Aware Robust Fine-tuning (CAR-FT). CAR-FT regularizes the model during fine-tuning to capture context information. Specifically, we use zero-shot prompt weights to obtain the context distribution contained in the image. By minimizing the Kullback-Leibler Divergence (KLD) between the context distributions induced by the original and fine-tuned CLIP models, CAR-FT lets the context-aware ability of CLIP be inherited by downstream tasks and achieves both higher In-Distribution (ID) and Out-Of-Distribution (OOD) accuracy. Experimental results show that CAR-FT achieves superior robustness on five OOD test datasets of ImageNet, while also bringing accuracy gains on nine downstream tasks. Additionally, CAR-FT surpasses previous Domain Generalization (DG) methods and achieves 78.5% average accuracy on the DomainBed benchmark, establishing a new state of the art.
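Below is a minimal PyTorch-style sketch of the regularizer as we read it from the abstract, assuming a CLIP-like interface that provides image features from both the frozen and the fine-tuned encoders; the function names, temperature, and KLD direction are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Sketch of the CAR-FT context regularizer (our reading of the abstract, not the
# authors' code). "Context" prompts such as "a photo of", "a sketch of", ... are
# encoded once with the frozen text encoder; the context distribution of an image
# is the softmax over its similarities to those context embeddings.

def context_distribution(image_feat, context_text_feats, temperature=0.01):
    """image_feat: (B, D); context_text_feats: (C, D). Returns a (B, C) distribution."""
    image_feat = F.normalize(image_feat, dim=-1)
    context_text_feats = F.normalize(context_text_feats, dim=-1)
    logits = image_feat @ context_text_feats.t() / temperature
    return F.softmax(logits, dim=-1)

def car_ft_loss(logits, labels, feat_finetuned, feat_frozen, context_text_feats,
                kld_weight=1.0):
    """Cross-entropy on the downstream task plus a KLD term that keeps the
    fine-tuned model's context distribution close to the frozen CLIP one."""
    ce = F.cross_entropy(logits, labels)
    p_ft = context_distribution(feat_finetuned, context_text_feats)
    p_zs = context_distribution(feat_frozen, context_text_feats)
    kld = F.kl_div(p_ft.log(), p_zs.detach(), reduction="batchmean")
    return ce + kld_weight * kld
```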
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for the automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and a power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
Adversarial Training (AT), usually regarded as one of the most effective methods to defend against adversarial examples, can largely harm standard performance, and is therefore of limited usefulness for industrial-scale production and applications. Surprisingly, this phenomenon is completely the opposite in natural language processing (NLP) tasks, where AT can even be beneficial. We note that the merit of AT in NLP tasks may come from the discrete and symbolic input space. To borrow this advantage from NLP-style AT, we propose Discrete Adversarial Training (DAT). DAT leverages VQGAN to reform the image data into discrete text-like inputs, i.e., visual words. It then minimizes the maximal risk on such discrete images with symbolic adversarial perturbations. We further provide an explanation from a distributional perspective to demonstrate the effectiveness of DAT. As a plug-and-play technique for enhancing visual representations, DAT achieves significant improvements on multiple tasks, including image classification, object detection, and self-supervised learning. Notably, the model pre-trained with Masked Auto-Encoding (MAE) and fine-tuned with our DAT, without extra data, achieves 31.40 mCE on ImageNet-C and 32.77% top-1 accuracy on Stylized-ImageNet, establishing a new state of the art. The code will be available at https://github.com/alibaba/easyrobust.
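The following is a rough, hedged sketch of what one DAT step might look like under our reading of the abstract; `vq_encode`, `vq_decode`, and `codebook` are hypothetical stand-ins for a VQGAN interface, not a real library API, and the sign-gradient perturbation is our simplification of the symbolic attack.

```python
import torch
import torch.nn.functional as F

# Rough sketch of one DAT step as understood from the abstract; the VQGAN
# interfaces are placeholders and the decoder is assumed differentiable.

def discrete_adversarial_step(model, vq_encode, vq_decode, codebook, images, labels,
                              step_size=0.1):
    # 1) Reform images into discrete "visual words" (codebook indices).
    indices = vq_encode(images)                         # (B, H*W) long tensor
    z = codebook[indices].clone().requires_grad_(True)  # continuous code embeddings

    # 2) Symbolic perturbation: take a gradient step in embedding space, then
    #    snap back to the nearest codebook entries (i.e., flip visual words).
    loss = F.cross_entropy(model(vq_decode(z)), labels)
    grad, = torch.autograd.grad(loss, z)
    z_adv = z + step_size * grad.sign()
    dists = torch.cdist(z_adv.flatten(0, 1), codebook)  # distances to all codes
    adv_indices = dists.argmin(dim=-1).view_as(indices)

    # 3) Minimize the risk on the decoded adversarial (discrete) images.
    adv_images = vq_decode(codebook[adv_indices])
    return F.cross_entropy(model(adv_images), labels)
```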
This study proposes a deep learning-based tracking method for ultrasound (US) image-guided radiation therapy. The proposed cascade deep learning model consists of an attention network, a mask region-based convolutional neural network (Mask R-CNN), and a long short-term memory (LSTM) network. The attention network maps US images to suspected regions of landmark motion in order to reduce the search region. The Mask R-CNN then produces multiple region-of-interest (ROI) proposals within the reduced region and identifies the proposed landmarks via three network heads: bounding-box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among consecutive image frames for bounding-box regression and proposal classification. To consolidate the final proposals, a selection method is designed based on the similarities between sequential frames. The proposed method was tested on the liver US tracking datasets of the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2015 challenges, where the landmarks were annotated by three experienced observers to obtain their mean positions. On the 24 given sequences for which we have ground truth, the mean tracking error over all landmarks was 0.65 +/- 0.56 mm, and the errors of all landmarks were within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset, which has image patterns similar to those of the training data, resulting in a mean tracking error of 0.94 +/- 0.83 mm. Our experimental results demonstrate the feasibility and accuracy of the proposed method in tracking liver anatomical landmarks using US images, providing a potential solution for active motion management during radiation therapy.
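As a schematic of how the cascade fits together (attention network, then Mask R-CNN, then LSTM, then similarity-based selection), here is a hedged Python sketch; all function names are placeholders for the three sub-networks, not a published API.

```python
# Schematic of the cascade described above (function names are placeholders,
# not the authors' code).
def track_landmarks(us_frames, attention_net, mask_rcnn, lstm, history):
    """us_frames: list of ultrasound frames; returns one landmark per frame."""
    tracked = []
    for frame in us_frames:
        search_region = attention_net(frame)              # 1) shrink the search area
        proposals = mask_rcnn(frame, search_region)       # 2) ROI proposals with
                                                          #    box / class / mask heads
        proposals = lstm(proposals, history)              # 3) temporal modeling of
                                                          #    boxes and scores
        best = max(proposals,                             # 4) keep the proposal most
                   key=lambda p: p["similarity_to_prev"]) #    similar to the last frame
        history.append(best)
        tracked.append(best["landmark"])
    return tracked
```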
Multi-scenario recommendation is dedicated to retrieving relevant items for users across multiple scenarios, which is ubiquitous in industrial recommendation systems. These scenarios enjoy partial overlaps in users and items, while the distributions of different scenarios differ. The key point of multi-scenario modeling is to efficiently exploit whole-scenario information and generate adaptive representations for both users and items across multiple scenarios. We summarize three practical challenges that are not well addressed in multi-scenario modeling: (1) the lack of fine-grained and decoupled control of information transfer across multiple scenarios; (2) the insufficient exploitation of entire-space samples; and (3) the multi-scenario representation disentanglement problem of items. In this paper, we propose a Scenario-Adaptive and Self-Supervised (SASS) model to solve the three challenges mentioned above. Specifically, we design a Multi-Layer Scenario-Adaptive Transfer (ML-SAT) module with a scenario-adaptive gate unit to select and fuse effective transfer information from the whole scenario to individual scenarios in a fine-grained and decoupled way. To fully exploit the power of entire-space samples, a two-stage training process, including pre-training and fine-tuning, is introduced. The pre-training stage is based on a scenario-supervised contrastive learning task, with training samples drawn from both labeled and unlabeled data spaces. The model is built symmetrically on the user side and the item side, so that we can obtain distinguishing representations of items in different scenarios. Extensive experimental results on public and industrial datasets demonstrate the superiority of the SASS model over state-of-the-art methods. The model also improves the average watching time per user by more than 8.0% in online A/B tests.
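To illustrate the kind of gating the abstract describes, here is a minimal sketch of a scenario-adaptive gate unit in the spirit of ML-SAT; it is our simplification under assumed tensor shapes, and the real module is considerably more elaborate.

```python
import torch
import torch.nn as nn

# Sketch of a scenario-adaptive gate unit in the spirit of ML-SAT
# (our simplification of the abstract, not the published architecture).
class ScenarioAdaptiveGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.transfer = nn.Linear(dim, dim)   # transforms whole-scenario information

    def forward(self, h_scene, h_global):
        """h_scene: scenario-specific hidden state; h_global: whole-scenario state."""
        g = self.gate(torch.cat([h_scene, h_global], dim=-1))
        # The gate decides, per dimension, how much global information to let in,
        # giving fine-grained, decoupled transfer into each individual scenario.
        return g * h_scene + (1.0 - g) * self.transfer(h_global)
```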
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge includes two tracks. Track 1 aims at the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
Multi-label image classification aims to predict all possible labels in an image. Considering that annotating all labels in each training image can be costly, the task is often formulated as a partial-label learning problem. Existing works on partial-label learning focus on the case where each training image is annotated with only a subset of its labels. A special case is to annotate only one positive label per training image. To further alleviate the annotation burden and enhance classifier performance, this paper proposes a new partial-label setting in which only a subset of the training images is labeled, each with only one positive label, while the remaining training images stay unlabeled. To handle this new setting, we propose an end-to-end deep network, PLMCL (Partial Label Momentum Curriculum Learning), which learns to produce confident pseudo labels for both partially-labeled and unlabeled training images. A new momentum-based law updates the soft pseudo labels on each training image by taking into account the updating velocity of the pseudo labels, which helps avoid trapping into low-confidence local minima, especially at the early stage of training, when observed labels are scarce and confidence in the pseudo labels is low. In addition, we present a confidence-aware scheduler to adaptively perform easy-to-hard learning for different labels. Extensive experiments demonstrate that our proposed PLMCL outperforms many state-of-the-art multi-label classification methods under various partial-label settings on three different datasets.
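The momentum-based update can be sketched as follows; the exact form, as well as the coefficients `beta` and `eta`, are our assumptions for illustration and not taken from the paper.

```python
import torch

# Sketch of a momentum-style soft pseudo-label update as described in the
# abstract (the exact rule and hyper-parameters are assumptions).
def update_pseudo_labels(pseudo, velocity, model_probs, beta=0.9, eta=0.1):
    """pseudo, velocity, model_probs: (B, num_labels) tensors with values in [0, 1]."""
    # Velocity accumulates how fast the pseudo labels have been moving toward
    # the model's current predictions; the momentum term damps noisy flips
    # early in training, when few labels are observed and confidence is low.
    velocity = beta * velocity + (1.0 - beta) * (model_probs - pseudo)
    pseudo = (pseudo + eta * velocity).clamp(0.0, 1.0)
    return pseudo, velocity
```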
Heterogeneous graphs, which contain multiple types of nodes and edges, are ubiquitous in various domains, including bibliographic networks, social media, and knowledge graphs. As a fundamental task in analyzing heterogeneous graphs, relevance measurement aims to compute the relevance between two objects of different types, and has been used in many applications such as web search, recommendation, and community detection. Most existing relevance measures focus on homogeneous networks, where objects are of the same type; a few measures have been developed for heterogeneous graphs, but they usually require predefined meta-paths. Defining meaningful meta-paths requires substantial domain knowledge, which largely limits their applications, especially on schema-rich heterogeneous graphs such as knowledge graphs. Recently, graph neural networks (GNNs) have been widely applied to many graph mining tasks, but they have not yet been applied to measuring relevance. To address the above problems, we propose a novel GNN-based relevance measure, namely GSim. Specifically, we first theoretically analyze and show that GNNs are effective in measuring the relevance of nodes in a graph. We then propose a Context-Path-based Graph Neural Network (CP-GNN) to automatically leverage the semantics in heterogeneous graphs. Moreover, we exploit CP-GNN to support relevance measurement between two objects of any type. Extensive experiments demonstrate that GSim outperforms existing measures.
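At a high level, a GNN-based relevance measure of this kind scores a pair of nodes by the similarity of their learned embeddings; the sketch below illustrates that idea with `hetero_gnn` as a placeholder for CP-GNN, whose architecture the abstract does not fully specify, so the interface shown is assumed.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a GNN-based relevance measure in the spirit of GSim:
# run a (heterogeneous) GNN to obtain node embeddings, then score a pair of
# nodes of any two types by embedding similarity. `hetero_gnn` stands in for
# CP-GNN (hypothetical interface).
def gsim_score(hetero_gnn, graph, node_u, node_v):
    embeddings = hetero_gnn(graph)                 # per-node embedding lookup
    z_u = F.normalize(embeddings[node_u], dim=-1)
    z_v = F.normalize(embeddings[node_v], dim=-1)
    return torch.dot(z_u, z_v)                     # cosine similarity as relevance
```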